The image-based head swapping task aims to stitch a source head onto another source body seamlessly. This seldom-studied task faces two major challenges: 1) preserving the head and body from different sources while generating a seamless transition region; 2) the absence of any paired head swapping dataset or benchmark to date. In this paper, we propose an image-based head swapping framework (HS-Diffusion) which consists of a semantic-guided latent diffusion model (SG-LDM) and a semantic layout generator. We blend the semantic layouts of the source head and source body, and then inpaint the transition region with the semantic layout generator, achieving a coarse-grained head swap. Conditioned on the blended layout, SG-LDM can further implement a fine-grained head swap through a progressive fusion process, while preserving the source head and source body with high-quality reconstruction. To this end, we design a head-cover augmentation strategy for training and a neck alignment trick for geometric realism. Importantly, we construct a new image-based head swapping benchmark and propose two tailor-designed metrics (Mask-FID and Focal-FID). Extensive experiments demonstrate the superiority of our framework. The code will be available at: https://github.com/qinghew/HS-Diffusion.
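The coarse stage (blend the two semantic layouts, leave the head-body boundary for the layout generator to inpaint) can be sketched as follows. The label ids, the UNKNOWN marker, and the one-pixel transition band are illustrative assumptions, not the paper's actual semantic classes:

```python
import numpy as np

# Hypothetical layout labels; the paper's semantic classes likely differ.
BG, HEAD, BODY, UNKNOWN = 0, 1, 2, 255

def dilate(mask, iters=1):
    """4-connected binary dilation, implemented with array shifts."""
    m = mask.copy()
    for _ in range(iters):
        p = np.pad(m, 1)
        m = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
             | p[1:-1, :-2] | p[1:-1, 2:])
    return m

def blend_layouts(head_layout, body_layout, band=1):
    """Coarse head swap on semantic layouts: take HEAD pixels from the
    head source, everything else from the body source, and mark a thin
    band around the head as UNKNOWN for the layout generator to inpaint."""
    blended = body_layout.copy()
    head_mask = head_layout == HEAD
    blended[head_mask] = HEAD
    transition = dilate(head_mask, band) & ~head_mask
    blended[transition] = UNKNOWN
    return blended
```

On real layouts the UNKNOWN band would then be filled by the learned layout generator before conditioning SG-LDM.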
Speech representation learning has improved both speech understanding and speech synthesis tasks for a single language. However, its ability in cross-lingual scenarios has not been explored. In this paper, we extend the pretraining method to cross-lingual multi-speaker speech synthesis tasks, including cross-lingual multi-speaker voice cloning and cross-lingual multi-speaker speech editing. We propose a speech-text joint pretraining framework, in which we randomly mask the spectrogram and the phonemes given a speech example and its transcription. By learning to reconstruct the masked parts of the input in different languages, our model shows great improvements over speaker-embedding-based multi-speaker TTS methods on both tasks. Moreover, our framework is end-to-end for both training and inference, without any finetuning effort. The code and model are publicly available in PaddleSpeech.
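The joint masking step described above might look like the following sketch; the masking ratios, the frame-level granularity, and the MASK_ID token are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

MASK_ID = 0  # hypothetical phoneme mask-token id

def mask_speech_text(spec, phonemes, spec_ratio=0.3, phone_ratio=0.15, seed=0):
    """Randomly mask spectrogram frames and phoneme tokens; the model is
    then trained to reconstruct the masked parts. Returns the corrupted
    inputs plus boolean masks marking the positions to reconstruct."""
    rng = np.random.default_rng(seed)
    spec, phonemes = spec.copy(), phonemes.copy()
    frame_mask = rng.random(spec.shape[0]) < spec_ratio
    spec[frame_mask] = 0.0                     # zero out whole frames
    phone_mask = rng.random(phonemes.shape[0]) < phone_ratio
    phonemes[phone_mask] = MASK_ID
    return spec, phonemes, frame_mask, phone_mask
```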
Finger vein recognition is an emerging biometric technology. Unlike biometric traits on the body surface, the vein vessels of the finger lie deep beneath the skin. Owing to this, finger vein recognition is highly stable and private: the veins are almost impossible to steal and difficult to tamper with externally. Unlike traditional machine-learning-based finger vein recognition methods, artificial neural network techniques, especially deep learning, do not rely on feature engineering and deliver superior performance. To summarize the development of finger vein recognition based on artificial neural networks, this paper collects 149 related papers. First, we introduce the background of finger vein recognition and the motivation for this survey. Then, the development history of artificial neural networks and the representative networks applied to finger vein recognition tasks are introduced, followed by a description of the public datasets widely used in finger vein recognition. After that, we summarize the related finger vein recognition tasks based on classical neural networks and deep neural networks, respectively. Finally, the challenges and potential development directions of finger vein recognition are discussed. To the best of our knowledge, this is the first comprehensive survey focusing on finger vein recognition based on artificial neural networks.
As a novel deep learning model, gcForest has been widely used in various applications. However, the multi-grained scanning of the current gcForest produces many redundant feature vectors, which increases the model's time cost. To screen out these redundant feature vectors, we introduce a hashing-based screening mechanism for multi-grained scanning and propose a model called HW-Forest, which adopts two strategies: hashing screening and window screening. In the hashing screening strategy, HW-Forest uses a perceptual hashing algorithm to compute the similarity between feature vectors, removing the redundant feature vectors produced by multi-grained scanning and significantly reducing time cost and memory consumption. Furthermore, we adopt a self-adaptive instance screening strategy, called window screening, to improve the performance of our approach; it achieves higher accuracy without hyperparameter tuning on different datasets. Our experimental results show that HW-Forest achieves higher accuracy than other models while also lowering the time cost.
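The hashing-screening idea can be illustrated with a toy sketch: binarize each feature vector against its own mean as a crude perceptual-style hash, then greedily drop any vector whose hash is within a few bits of one already kept. The hash function and Hamming threshold here are illustrative assumptions, not HW-Forest's exact algorithm:

```python
import numpy as np

def phash(v):
    """Crude perceptual-style hash: 1 where an element exceeds the vector mean."""
    return v > v.mean()

def hash_screen(vectors, max_hamming=2):
    """Greedy screening: keep a vector only if its hash differs from every
    previously kept hash by more than max_hamming bits; near-duplicates
    (redundant feature vectors) are discarded."""
    kept, hashes = [], []
    for i, v in enumerate(vectors):
        h = phash(v)
        if all((h != k).sum() > max_hamming for k in hashes):
            kept.append(i)
            hashes.append(h)
    return kept
```

In this sketch the second vector hashes identically to the first and is screened out, while the reversed vector survives.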
Graph Convolutional Networks (GCNs) have shown remarkable potential for exploring graph representations. However, the GCN aggregation mechanism fails to generalize to networks with heterophily, where most nodes have neighbors from different classes, a situation that commonly exists in real-world networks. To make the propagation and aggregation mechanism of GCNs suitable for both homophily and heterophily (and even their mixture), we introduce block modeling into the GCN framework so that it can perform "block-guided classified aggregation" and automatically learn the appropriate aggregation rules for neighbors of different classes. By incorporating block modeling into the aggregation process, the GCN can discriminatively aggregate information from homophilic and heterophilic neighbors according to their degree of homophily. We compared our algorithm with state-of-the-art methods that address the heterophily problem. Empirical results demonstrate the superiority of our new approach over existing methods on heterophilic datasets, while maintaining competitive performance on homophilic datasets.
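A minimal sketch of block-guided aggregation: edge weights come from a class-compatibility (block) matrix B applied to (soft) class assignments, so homophilic and heterophilic neighbors can be weighted differently. The fixed B, the given soft labels, and the simple row normalization are simplifying assumptions; in the paper these would be learned:

```python
import numpy as np

def block_guided_aggregate(X, A, soft_labels, B):
    """Aggregate neighbor features with weights from a block
    (class-compatibility) matrix B: the weight on edge (i, j) is
    soft_labels[i] @ B @ soft_labels[j], restricted to actual edges
    and row-normalized."""
    W = soft_labels @ B @ soft_labels.T    # n x n class compatibility
    W = W * A                              # keep only existing edges
    W = W / np.clip(W.sum(1, keepdims=True), 1e-9, None)
    return W @ X
```

With B = I this reduces to homophily-only aggregation; an off-diagonal-heavy B would instead emphasize heterophilic neighbors.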
In this paper, we propose a deep-learning-based model to detect extratropical cyclones (ETCs) in the Northern Hemisphere, while developing a novel workflow for processing images and generating labels for ETCs. We first label the cyclone centers by adapting an approach from Bonfanti et al. [1] and establish criteria for three labeling classes of ETCs: the developing, mature, and declining stages. We then propose a framework for labeling and preprocessing the images in our dataset. Once the images and labels are ready to serve as inputs, we build an object detection model adapted from the Single Shot Detector (SSD) to fit the format of our dataset. We train and evaluate our model on the labeled dataset under two settings (binary and multiclass classification), keeping a record of the results. Finally, we achieve strong performance in detecting mature-stage ETCs (mean average precision of 86.64%) and acceptable results in detecting ETCs of all three stages (mean average precision of 79.34%). We conclude that the SSD model can successfully detect ETCs at different stages and shows great potential for future applications of ETC detection in other relevant settings.
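Detection metrics such as the reported mean average precision rest on intersection-over-union matching between predicted and ground-truth boxes. A standard IoU helper (generic, not tied to this paper's code) looks like:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A detection is typically counted as a true positive when its IoU with a same-class ground-truth box exceeds a threshold (commonly 0.5).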
In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data are assumed to be sampled from a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. Precisely, we provide high probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of all the individual underlying manifolds. We provide an example of a family of similarity graphs satisfying these sufficient conditions, which we call annular proximity graphs with angle constraints. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
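To illustrate how a graph Laplacian built from observations can expose multi-manifold structure, here is a deliberately simple spectral sketch: a plain symmetric k-NN graph (not the paper's annular graphs with angle constraints), with the bottom Laplacian eigenvectors used as an embedding. Grouping (near-)identical embedding rows stands in for k-means and only works in the easy regime where the manifolds fall into different connected components:

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric unweighted k-NN adjacency -- a plain proximity graph,
    not the annular angle-constrained construction from the paper."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    A = np.zeros_like(D)
    for i, js in enumerate(np.argsort(D, axis=1)[:, :k]):
        A[i, js] = 1.0
    return np.maximum(A, A.T)

def spectral_labels(A, n_clusters=2, tol=1e-6):
    """Embed with the bottom Laplacian eigenvectors, then group
    near-identical rows (a stand-in for k-means that suffices when the
    clusters lie in different connected components)."""
    L = np.diag(A.sum(1)) - A          # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :n_clusters]
    labels, centers = [], []
    for row in emb:
        for c, ctr in enumerate(centers):
            if np.linalg.norm(row - ctr) < tol:
                labels.append(c)
                break
        else:
            centers.append(row)
            labels.append(len(centers) - 1)
    return labels
```

Intersecting manifolds are exactly where such naive proximity graphs fail, which is what motivates the angle-constrained constructions analyzed in the paper.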
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of GI-based allocation designs to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
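The general shape of index-based allocation can be sketched for exponential rewards: with rate parameter $\lambda$ a Gamma$(\alpha, \beta)$ prior is conjugate, so after $n$ pulls with total reward $s$ the posterior is Gamma$(\alpha + n, \beta + s)$ and the posterior mean reward is $(\beta + s)/(\alpha + n - 1)$ for $\alpha + n > 1$. The crude exploration bonus below is a stand-in for the actual Gittins index, which requires a separate dynamic-programming computation:

```python
import numpy as np

def allocate(history, prior_shape=1.0, prior_rate=1.0, bonus=1.0):
    """Assign the next participant to the arm with the largest optimistic
    index. `history` is a list of reward lists, one per arm. The additive
    bonus term is a hypothetical placeholder for the Gittins index."""
    idx = []
    for rewards in history:
        n, s = len(rewards), sum(rewards)
        # posterior mean reward under a conjugate Gamma prior on the rate
        mean = (prior_rate + s) / max(prior_shape + n - 1, 1e-9)
        idx.append(mean + bonus / np.sqrt(n + 1))
    return int(np.argmax(idx))
```

Unpulled arms get a huge index (the posterior mean is undefined at $\alpha + n \le 1$), so every arm is tried at least once before the empirically better arm starts to dominate.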
Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such a dataset is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with the Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer with limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
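The online/target mechanics follow the general BYOL recipe: the target branch is an exponential moving average of the online branch, and the online prediction is pulled toward the target representation. A generic sketch of those two pieces (not BOLT's actual networks or losses) is:

```python
import numpy as np

def ema_update(target_params, online_params, tau=0.99):
    """Target branch tracks the online branch by exponential moving
    average, as in BYOL-style self-supervision."""
    return [tau * t + (1 - tau) * o
            for t, o in zip(target_params, online_params)]

def consistency_loss(online_pred, target_repr):
    """Negative cosine similarity between the online prediction and the
    (stop-gradient) target representation; minimized at -1 when aligned."""
    o = online_pred / np.linalg.norm(online_pred)
    t = target_repr / np.linalg.norm(target_repr)
    return -float(o @ t)
```

BOLT adds its auxiliary difficulty ranking task on top of this consistency objective.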
Text clustering and topic extraction are two important tasks in text mining. Usually, these two tasks are performed separately. For topic extraction to facilitate clustering, we can first project texts into a topic space and then perform a clustering algorithm to obtain clusters. To promote topic extraction by clustering, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship. Performing them separately fails to make them mutually benefit each other to achieve the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) which integrates text clustering and topic extraction into a unified framework and can achieve high-quality clustering results while extracting topics from each cluster simultaneously. Our framework includes four components: enhanced language model training, dimensionality reduction, clustering and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, it pays close attention to topic-related words for topic extraction, owing to its self-attention architecture. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations in this framework.
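The cluster-then-extract loop the abstract contrasts against can be shown in miniature: bag-of-words vectors, greedy cosine-similarity clustering, and the most frequent words per cluster as its "topic". This toy is a stand-in for the language-model-based embeddings and dimensionality reduction used in the framework; the tokenizer, similarity threshold, and greedy assignment are all illustrative choices:

```python
from collections import Counter

def cluster_and_topics(docs, sim_threshold=0.5, n_top=2):
    """Greedy text clustering over bag-of-words counts, followed by
    per-cluster topic extraction (top-frequency words)."""
    labels, clusters = [], []          # clusters: one Counter per cluster
    for doc in docs:
        c = Counter(doc.split())
        best, best_sim = -1, 0.0
        for j, ctr in enumerate(clusters):
            common = set(c) & set(ctr)
            num = sum(c[w] * ctr[w] for w in common)
            den = (sum(v * v for v in c.values()) ** 0.5 *
                   sum(v * v for v in ctr.values()) ** 0.5)
            sim = num / den if den else 0.0
            if sim > best_sim:
                best, best_sim = j, sim
        if best >= 0 and best_sim >= sim_threshold:
            labels.append(best)
            clusters[best].update(c)   # fold the doc into the cluster
        else:
            labels.append(len(clusters))
            clusters.append(c)
    topics = [[w for w, _ in ctr.most_common(n_top)] for ctr in clusters]
    return labels, topics
```

ClusTop's point is precisely that the two stages of this pipeline should inform each other rather than run once in sequence.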